With the availability of commercial light field (LF) cameras, LF imaging has emerged as an enabling technology in computational photography. However, the spatial resolution of commercial microlens-based LF cameras is significantly constrained because of the inherent multiplexing of spatial and angular information, and this has become the main bottleneck for other applications of light field cameras. This paper proposes an adaptation module for a pretrained single image super-resolution (SISR) network, so that a powerful SISR model can be leveraged instead of a highly engineered, light-field-specific super-resolution model. The adaptation module consists of a sub-aperture shift block and a fusion block; it adapts the SISR network to further exploit the spatial and angular information in LF images and improve super-resolution performance. Experimental validation shows that the proposed method outperforms existing light field super-resolution algorithms, with PSNR gains of more than 1 dB over the same pretrained SISR model across all datasets for scale factor 2, and gains of 0.6 to 1 dB for scale factor 4.
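The abstract above does not give implementation details, so the following is only a minimal PyTorch-style sketch of the general idea: shift each sub-aperture view toward the centre view, fuse the stacked views into a single image-like tensor, and feed it to a pretrained SISR backbone. The module names, the one-pixel shift rule, and the 1x1-convolution fusion are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class SubApertureShift(nn.Module):
    """Shift each sub-aperture view toward the centre view and stack along channels (assumed rule)."""
    def __init__(self, angular_size=5):
        super().__init__()
        self.angular_size = angular_size  # e.g. a 5x5 grid of views

    def forward(self, views):
        # views: (B, U*V, C, H, W) sub-aperture images
        c = self.angular_size // 2
        shifted = []
        for idx in range(views.shape[1]):
            u, v = divmod(idx, self.angular_size)
            # shift proportional to the angular offset from the centre view (1 px per view assumed)
            shifted.append(torch.roll(views[:, idx], shifts=(u - c, v - c), dims=(-2, -1)))
        return torch.cat(shifted, dim=1)  # (B, U*V*C, H, W)

class LFAdaptedSISR(nn.Module):
    """Adaptation module (shift + fusion) in front of a pretrained single-image SR network."""
    def __init__(self, sisr_backbone, angular_size=5, channels=3):
        super().__init__()
        self.shift = SubApertureShift(angular_size)
        self.fuse = nn.Conv2d(angular_size * angular_size * channels, channels, kernel_size=1)
        self.sisr = sisr_backbone  # any pretrained SISR model mapping (B, C, H, W) -> (B, C, sH, sW)

    def forward(self, views):
        return self.sisr(self.fuse(self.shift(views)))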
Images with haze of different varieties often pose a significant challenge to dehazing. Therefore, guidance by estimates of haze parameters related to the variety would be beneficial, and updating them progressively, jointly with haze reduction, allows effective dehazing. To this end, we propose a multi-network dehazing framework containing novel interdependent dehazing and haze parameter updater networks that operate in a progressive manner. The haze parameters, the transmission map and the atmospheric light, are first estimated using dedicated convolutional networks that allow color-cast handling. The estimated parameters are then used to guide our dehazing module, where the estimates are progressively updated by novel convolutional networks. The updating takes place jointly with progressive dehazing by a convolutional network that invokes inter-step dependencies. The joint progressive updating and dehazing gradually refine the haze parameter estimates toward achieving effective dehazing. Through different studies, our dehazing framework is shown to be more effective than image-to-image mapping or dehazing based on a predefined haze formation model. Our dehazing framework is found, both qualitatively and quantitatively, to outperform the state-of-the-art on synthetic and real-world hazy images from several datasets with varied haze conditions.
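For reference, the two haze parameters named above, the transmission map $t$ and the atmospheric light $A$, are the parameters of the standard atmospheric scattering model of haze formation; the framework estimates them and then refines them progressively rather than plugging them into the fixed inversion:

$$I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr), \qquad J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A,$$

where $I$ is the observed hazy image, $J$ the haze-free scene radiance, and $t_0$ a small lower bound that keeps the inversion numerically stable.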
Jamdani is the strikingly patterned textile heritage of Bangladesh. The exclusive geometric motifs woven into the fabric are the most attractive part of this craftsmanship, with a remarkable influence on textile and fine art. In this paper, we develop a technique based on a Generative Adversarial Network that learns to generate entirely new Jamdani patterns from a collection of Jamdani motifs that we assembled; the newly generated motifs mimic the appearance of the original designs. Users can input the skeleton of a desired pattern as rough strokes, and our system completes the input by generating the full motif, which follows the geometric structure of real Jamdani ones. To serve this purpose, we collected and preprocessed a dataset containing a large number of Jamdani motif images from authentic sources via fieldwork and applied a state-of-the-art method called pix2pix to it. To the best of our knowledge, this is currently the only available dataset of Jamdani motifs in digital format for computer vision research. Our experimental results of the pix2pix model on this dataset show satisfactory computer-generated images of Jamdani motifs, and we believe that our work will open a new avenue for further research.
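As a rough, self-contained illustration of the stroke-to-motif step, the snippet below runs a stand-in conditional generator on a normalized stroke image; the tiny encoder-decoder, the image size, and the commented-out checkpoint path are assumptions standing in for a full pix2pix U-Net, not the authors' released code.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a pix2pix U-Net generator (the real one is deeper and uses skip connections)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),      # encode the rough strokes
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),    # decode toward a motif image
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

gen = TinyGenerator()
# gen.load_state_dict(torch.load("jamdani_pix2pix_G.pth"))  # hypothetical trained checkpoint
gen.eval()
strokes = torch.rand(1, 3, 256, 256) * 2 - 1   # placeholder for a normalized rough-stroke image in [-1, 1]
with torch.no_grad():
    motif = gen(strokes)                        # completed motif image, also in [-1, 1]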
This paper presents SVAM (Sequential Variance-Altered MLE), a unified framework for learning generalized linear models under adversarial label corruption in training data. SVAM extends to tasks such as least squares regression, logistic regression, and gamma regression, whereas many existing works on learning with label corruptions focus only on least squares regression. SVAM is based on a novel variance reduction technique that may be of independent interest and works by iteratively solving weighted MLEs over variance-altered versions of the GLM objective. SVAM offers provable model recovery guarantees superior to the state-of-the-art for robust regression even when a constant fraction of training labels are adversarially corrupted. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification. Code for SVAM is available at https://github.com/purushottamkar/svam/
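The abstract only outlines the iteration, so the sketch below shows what one variance-altered weighted-MLE loop could look like for the least-squares case: weight each sample by a Gaussian of its current residual and tighten the precision (the altered variance) geometrically. The weighting form, the schedule parameters (beta0, xi) and the ridge term are assumptions; SVAM's exact updates may differ.

import numpy as np

def variance_altered_ls(X, y, beta0=0.01, xi=1.5, iters=30):
    """Robust least squares via iteratively re-weighted MLEs with a sharpening precision beta."""
    n, d = X.shape
    w = np.zeros(d)                 # current model estimate
    beta = beta0                    # inverse of the "altered" variance
    for _ in range(iters):
        r = y - X @ w                               # residuals under the current model
        s = np.exp(-0.5 * beta * r ** 2)            # down-weight large (likely corrupted) residuals
        Xs = X.T * s                                # apply per-sample weights
        w = np.linalg.solve(Xs @ X + 1e-8 * np.eye(d), Xs @ y)  # weighted MLE = weighted least squares
        beta *= xi                                  # sharpen the variance-altered likelihood
    return w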
In recent years, the importance of smart healthcare cannot be overstated. The current work proposes to expand the state of the art of smart healthcare by integrating solutions for Obsessive Compulsive Disorder (OCD). Identification of OCD from oxidative stress biomarkers (OSBs) using machine learning is an important development in the study of OCD. However, this process involves the collection of OCD class labels from hospitals, the collection of corresponding OSBs from biochemical laboratories, the creation of an integrated and labeled dataset, the use of a suitable machine learning algorithm to design an OCD prediction model, and making these prediction models available to different biochemical laboratories for OCD prediction on unlabeled OSBs. Further, as the volume of labeled samples grows over time, the prediction model must be redesigned for further use. The whole process requires distributed data collection, data integration, coordination between the hospital and the biochemical laboratory, dynamic design of the machine learning OCD prediction model using a suitable algorithm, and making the model available to the biochemical laboratories. Keeping all of this in mind, Accu-Help, a fully automated, smart, and accurate conceptual model for OCD detection, is proposed to help biochemical laboratories detect OCD efficiently from OSBs. OSBs are classified into three classes: Healthy Individual (HI), OCD Affected Individual (OAI), and Genetically Affected Individual (GAI). The main component of the proposed framework is the design of the machine learning OCD prediction model. In Accu-Help, a neural-network-based approach is presented with an OCD prediction accuracy of 86 percent.
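As a rough illustration of the prediction-model component only (not the authors' implementation; the CSV name, feature layout and network size are assumptions), a small neural-network classifier over tabular OSB features could be set up as follows.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# assumed CSV with one column per biomarker and a 'label' column in {HI, OAI, GAI}
df = pd.read_csv("osb_labeled.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))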
Cyber intrusion attacks that compromise users' critical and sensitive data are escalating in volume and intensity, especially with the growing connections between our daily lives and the Internet. The large volume and high complexity of such intrusion attacks have impeded the effectiveness of most traditional defence techniques. At the same time, the remarkable performance of machine learning methods, especially deep learning, in computer vision has garnered research interest from the cyber security community in further enhancing and automating intrusion detection. However, expensive data labeling and the scarcity of anomalous data make it challenging to train an intrusion detector in a fully supervised manner. Therefore, intrusion detection based on unsupervised anomaly detection is also an important capability. In this paper, we propose a three-stage, deep-learning-based anomaly detection framework for network intrusion attack detection. The framework integrates unsupervised (K-means clustering), semi-supervised (GANomaly) and supervised (CNN) learning algorithms. We evaluate and report the performance of our implemented framework on three benchmark datasets: NSL-KDD, CIC-IDS2018, and TON_IoT.
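The abstract does not specify how the three stages are wired together, so the sketch below shows one plausible staged arrangement (an assumption, not the paper's exact pipeline): K-means groups traffic into coarse clusters, a GANomaly-style scorer flags anomalous samples, and a CNN classifier labels the flagged samples. The two learned models are passed in as callables since their internals are separate networks.

import numpy as np
from sklearn.cluster import KMeans

def three_stage_detect(features, ganomaly_score, cnn_classify,
                       n_clusters=8, anomaly_threshold=0.5):
    """features: (N, D) array of flow features.
    ganomaly_score: callable returning a per-sample anomaly score in [0, 1] (semi-supervised stage).
    cnn_classify: callable returning an attack-class label for each flagged sample (supervised stage)."""
    # Stage 1 (unsupervised): K-means groups traffic into coarse behavioural clusters
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

    # Stage 2 (semi-supervised): a GANomaly-style reconstruction score flags anomalous samples
    scores = ganomaly_score(features)
    flagged = scores > anomaly_threshold

    # Stage 3 (supervised): the CNN assigns an attack class to the flagged samples only
    labels = np.full(len(features), "benign", dtype=object)
    if flagged.any():
        labels[flagged] = cnn_classify(features[flagged])
    return clusters, flagged, labels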
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which come with new challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked, where we aim to obtain accurate pixel-level annotations of the peel, the onion pieces, the chopping board, the knife, the pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Alongside the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/visor
We propose a communication-efficient approach for federated learning in heterogeneous environments. System heterogeneity is reflected in the presence of $k$ different data distributions, with each user sampling data from only one of the $k$ distributions. The proposed approach requires only one round of communication between the users and the server, thereby significantly reducing the communication cost. Moreover, it provides strong learning guarantees in heterogeneous environments by achieving the optimal mean squared error (MSE) rate in terms of the sample size, i.e., the same rate as if all users sampled from a single data distribution, provided that the number of data points per user is above a threshold that we explicitly characterize in terms of the system parameters. Remarkably, this is achievable without any knowledge of the underlying distributions, or even of the number of distributions $k$. Numerical experiments illustrate our findings and highlight the performance of the proposed approach.
We present MultiCoNER, a multilingual dataset for named entity recognition covering 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixed subsets. The dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased texts), syntactically complex entities such as movie titles, and long-tail entity distributions. The 26M-token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We applied two NER models to the dataset: a baseline XLM-RoBERTa model and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1 = 54%), highlighting the difficulty of our data. GEMNET, which uses gazetteers, improves significantly (average macro-F1 improvement of +30%). MultiCoNER poses challenges even for large pretrained language models, and we believe it can help further research on building robust NER systems. MultiCoNER is publicly available at https://registry.opendata.aws/multiconer/, and we hope this resource will help advance research in various aspects of NER.
In this paper, an artificial-delay-based impedance controller is proposed for robotic manipulators with uncertain dynamics. The control law unifies a time-delayed estimation (TDE) framework with a second-order switching controller of the super-twisting algorithm (STA) type through a novel generalized filtered tracking error (GFTE). While the time-delayed estimation framework captures the uncertain robot dynamics and interaction forces by estimating them from the immediate past data of the states and control effort, the second-order switching control law in the outer loop provides robustness against the time-delay estimation (TDE) errors that arise from approximating the manipulator dynamics. The proposed control law thus attempts to establish the desired impedance model between the robot end-effector variables, i.e. force and motion, in the presence of uncertainties, both when encountering smooth contact forces and during free motion. Simulation results for a two-link manipulator using the proposed controller, along with a convergence analysis, are presented to validate the proposition.
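The abstract does not state the exact control law, so for orientation the standard textbook forms of the two ingredients it names are given below (not necessarily the paper's GFTE-based formulation): time-delayed estimation approximates the lumped uncertain dynamics $H$ from signals one sample (delay $L$) in the past, and the super-twisting algorithm acts on a sliding or filtered error $s$:

$$\hat{H}(t) \approx H(t-L) = \tau(t-L) - \bar{M}\,\ddot{q}(t-L),$$
$$u_{\mathrm{st}} = -k_1\,|s|^{1/2}\,\operatorname{sgn}(s) + v, \qquad \dot{v} = -k_2\,\operatorname{sgn}(s),$$

where $\tau$ is the joint torque, $\bar{M}$ a constant diagonal inertia-like gain matrix, and $k_1, k_2 > 0$ the switching gains.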